EgoCom: A Multi-person Multi-modal Egocentric Communications Dataset
Authors
Abstract
Similar papers
Multi-Modal Person
This paper deals with the elements of multi-modal person authentication systems. Test procedures for evaluating machine experts as well as machine supervisors, based on the leave-one-out principle, are described. Two independent machine experts for person authentication are presented along with their individual performances. These experts consisted of a face (Gabor features) and a speaker (LPC featu...
A multi-subject, multi-modal human neuroimaging dataset
We describe data acquired with multiple functional and structural neuroimaging modalities on the same nineteen healthy volunteers. The functional data include Electroencephalography (EEG), Magnetoencephalography (MEG) and functional Magnetic Resonance Imaging (fMRI) data, recorded while the volunteers performed multiple runs of hundreds of trials of a simple perceptual task on pictures of famil...
The WILDTRACK Multi-Camera Person Dataset
People detection methods are highly sensitive to occlusions among the targets. As multi-camera set-ups become more frequently encountered, joint exploitation of information across views allows for improved detection performance. We provide a large-scale HD dataset named WILDTRACK which finally makes advanced deep learning methods applicable to this problem. The seven-sta...
SIC_DB: Multi-Modal Database for Person Authentication
This paper presents a multi-modal database intended for person authentication from multiple cues. It currently contains three sessions of the same 120 individuals, providing profile and frontal color images, 3-D facial representations, and many French and some English speech utterances. People were selected for their availability so that new sessions can be easily acquired. Individual recogniti...
Multi-modal Person Recognition for Vehicular Applications
In this paper, we present biometric person recognition experiments in a real-world car environment using speech, face, and driving signals. We have performed experiments on a subset of the in-car CIAIR corpus collected at Nagoya University, Japan. We have used Mel-frequency cepstral coefficients (MFCC) for speaker recognition. For face recognition, we have reduced the feature dimension of e...
Journal
Journal title: IEEE Transactions on Pattern Analysis and Machine Intelligence
Year: 2020
ISSN: 0162-8828, 2160-9292, 1939-3539
DOI: 10.1109/tpami.2020.3025105